Viral Ottawa Senators fan blamed for team's 0-2 playoff start banished to Taiwan

FOX News

When it comes to sports superstitions, they don't make them much more militant than teams and players in the Stanley Cup Playoffs. Everyone on the team has to grow a playoff beard, and if the team ate at a certain restaurant and then had a great game, guess where you're eating for the next two months during home games? Hell, even Sidney Crosby has been wearing the same jockstrap for 20 years because of how superstitious he is (okay, that was TMI, I apologize). Suffice it to say, teams can get a little paranoid when it comes to luck and bad omens in the playoffs, which is why the Ottawa Senators had to act accordingly after their team fell down 0-2 against the Carolina Hurricanes.


Heterogeneity-Aware Personalized Federated Learning for Industrial Predictive Analytics

Hu, Yuhan, Fang, Xiaolei

arXiv.org Machine Learning

Federated prognostics enables clients (e.g., companies, factories, and production lines) to collaboratively develop a failure time prediction model while keeping each client's data local and confidential. However, traditional federated models often assume homogeneity in the degradation processes across clients, an assumption that may not hold in many industrial settings. To overcome this limitation, this paper proposes a personalized federated prognostic model designed to accommodate clients with heterogeneous degradation processes, allowing them to build tailored prognostic models. The model iteratively facilitates pairwise collaboration between clients with similar degradation patterns, which enhances the performance of personalized federated learning. To estimate parameters jointly from decentralized datasets, we develop a federated parameter estimation algorithm based on proximal gradient descent. The proposed approach addresses the limitations of existing federated prognostic models by simultaneously achieving model personalization, preserving data privacy, and providing comprehensive failure time distributions. The superiority of the proposed model is validated through extensive simulation studies and a case study using the turbofan engine degradation dataset from the NASA repository.
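The paper's estimation algorithm is not reproduced here, but the general shape of federated proximal gradient descent can be sketched. In the sketch below, the least-squares loss, the l1 regularizer with its soft-thresholding proximal operator, and the plain gradient averaging are all illustrative assumptions; the paper's actual objective and its personalization of the averaging step differ.

    import numpy as np

    def soft_threshold(w, t):
        # Proximal operator of t * ||w||_1 (an assumed regularizer).
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    def local_gradient(w, X, y):
        # Least-squares gradient on one client's local data,
        # which never leaves that client.
        return X.T @ (X @ w - y) / len(y)

    def federated_prox_gd(clients, dim, lr=0.1, lam=0.01, rounds=100):
        # clients: list of (X, y) pairs held locally; only gradients
        # are communicated to the server.
        w = np.zeros(dim)
        for _ in range(rounds):
            grads = [local_gradient(w, X, y) for X, y in clients]
            # Server averages the gradients, takes a gradient step,
            # then applies the proximal step for the regularizer.
            w = soft_threshold(w - lr * np.mean(grads, axis=0), lr * lam)
        return w

In the paper's personalized setting, each client would retain its own parameter vector, with collaboration weighted toward clients exhibiting similar degradation patterns; this sketch shows only the shared proximal-gradient skeleton.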


Time-Warping Recurrent Neural Networks for Transfer Learning

Hirschi, Jonathon

arXiv.org Machine Learning

Dynamical systems describe how a physical system evolves over time, and the same physical process can evolve faster or slower under different environmental conditions. We define time-warping as rescaling time in a model of a physical system. This thesis proposes a new method of transfer learning for Recurrent Neural Networks (RNNs) based on time-warping. We prove that for a class of linear, first-order differential equations known as time lag models, an LSTM can approximate these systems to any desired accuracy, and that the model can be time-warped while maintaining the approximation accuracy. The Time-Warping method of transfer learning is then evaluated on an applied problem: predicting fuel moisture content (FMC), an important quantity in wildfire modeling. An RNN with LSTM recurrent layers is pretrained on fuels with a characteristic time scale of 10 hours, for which large quantities of training data are available. The RNN is then adapted with transfer learning to generate predictions for fuels with characteristic time scales of 1 hour, 100 hours, and 1000 hours. The Time-Warping method is evaluated against several established methods of transfer learning and produces predictions of comparable accuracy, despite modifying only a small fraction of the parameters that the other methods modify.
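Concretely, time-warping amounts to resampling a series so that dynamics learned at one characteristic time scale can be reused at another. A minimal sketch, assuming a uniformly sampled one-dimensional series and linear interpolation (the thesis's exact warping construction and which LSTM parameters it retrains are not reproduced here):

    import numpy as np

    def time_warp(series, factor):
        # Rescale time: warped(t) = series(t / factor).
        # factor > 1 stretches the dynamics over more steps (a slower
        # process); factor < 1 compresses them. Linear interpolation
        # is an assumed, illustrative choice.
        n = len(series)
        new_n = max(int(round(n * factor)), 2)
        t_query = np.arange(new_n) / factor  # query times in old units
        return np.interp(t_query, np.arange(n), series)

For example, time_warp(series, 0.1) applied to a 100-hour fuel series compresses its dynamics so they resemble those of the 10-hour fuels the network was pretrained on.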


Uncertainty Quantification Via the Posterior Predictive Variance

Chaudhuri, Sanjay, Dustin, Dean, Clarke, Bertrand

arXiv.org Machine Learning

We use the law of total variance to generate multiple expansions for the posterior predictive variance. These expansions are sums of terms involving conditional expectations and conditional variances, and they provide a quantification of the sources of predictive uncertainty. Since the posterior predictive variance is fixed given the model, it represents a constant quantity that is conserved over these expansions. The terms in the expansions can be assessed in an absolute or relative sense to understand the main contributors to the length of prediction intervals. We quantify the term-wise uncertainty across expansions varying in the number of terms and the order of conditioning. In particular, given that a specific term in one expansion is small or zero, we identify the terms in other expansions that must also be small or zero. We illustrate this approach to predictive model assessment in several well-known models.

The Setting and Intuition

Everyone uses prediction intervals (PIs), but few examine their structure or, more precisely, how they should be interpreted in the context of a model with multiple components. Often PIs seem overconfident (too narrow) or useless (too wide). Both frequentist and Bayesian practitioners routinely report PIs.
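The identity driving these expansions is the law of total variance; a single level of conditioning on model parameters already splits the posterior predictive variance into two interpretable pieces (written here in LaTeX, with D denoting the observed data):

    \operatorname{Var}(Y \mid \mathcal{D})
      = \underbrace{\mathbb{E}\bigl[\operatorname{Var}(Y \mid \theta, \mathcal{D}) \,\big|\, \mathcal{D}\bigr]}_{\text{expected within-model variance}}
      + \underbrace{\operatorname{Var}\bigl(\mathbb{E}[Y \mid \theta, \mathcal{D}] \,\big|\, \mathcal{D}\bigr)}_{\text{variance due to parameter uncertainty}}

Iterating the same identity inside each term, and varying the order in which components are conditioned on, yields the multiple expansions described above; the left-hand side never changes, which is the conservation property the abstract notes.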


Inventor Beulah Louise Henry's unstoppable rise to becoming 'Lady Edison'

Popular Science

With 49 patents and over 100 inventions, Henry built an empire catering to women and children. Beulah Louise Henry invented everything from ice cream makers to radio dolls--despite a world that didn't take her seriously. Beulah Louise Henry was just nine years old when she came up with her first invention in 1896, a device that allowed a man to tip his hat without ever putting down his newspaper. By her death in 1973, at the age of 85, she'd come up with so many more--a doll with eyes that changed color with the press of a button, a sewing machine without a bobbin (a threaded spool that slowed down work because it had to be frequently refilled), a clock designed to help kids learn to tell time, and others--that the press even dubbed Henry "Lady Edison."


Privately Learning Decision Lists and a Differentially Private Winnow

Bun, Mark, Fang, William

arXiv.org Machine Learning

We give new differentially private algorithms for the classic problems of learning decision lists and large-margin halfspaces in the PAC and online models. In the PAC model, we give a computationally efficient algorithm for learning decision lists with minimal sample overhead over the best non-private algorithms. In the online model, we give a private analog of the influential Winnow algorithm for learning halfspaces, with a mistake bound polylogarithmic in the dimension and inverse polynomial in the margin. As an application, we describe how to privately learn decision lists in the online model, qualitatively matching state-of-the-art non-private guarantees.
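For context, the non-private Winnow algorithm that the paper privatizes is a multiplicative-weights online learner. The sketch below assumes Boolean features and the standard promotion/demotion rule; it omits the noise mechanism that a differentially private analog would add to the mistake-driven updates.

    import numpy as np

    def winnow(examples, dim, alpha=2.0):
        # Classic (non-private) Winnow with threshold dim / 2.
        # examples: iterable of (x, y) with x a 0/1 numpy vector and
        # y in {0, 1}. For a k-literal monotone disjunction the
        # mistake bound is O(k log dim), the polylogarithmic
        # dimension dependence the private analog aims to preserve.
        theta = dim / 2.0
        w = np.ones(dim)
        mistakes = 0
        for x, y in examples:
            pred = 1 if w @ x >= theta else 0
            if pred != y:
                mistakes += 1
                if y == 1:
                    w[x == 1] *= alpha   # false negative: promote
                else:
                    w[x == 1] /= alpha   # false positive: demote
        return w, mistakes

How the paper injects privacy (where noise enters, and how the margin drives the privacy-utility trade-off) is specific to its construction and is not reflected in this sketch.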